Achievable Rate



Capacity-Net-Based RIS Precoding Design without Channel Estimation for mmWave MIMO System

Huang, Chun-Yuan, Chou, Po-Heng, Huang, Wan-Jen, Chien, Ying-Ren, Tsao, Yu

arXiv.org Artificial Intelligence

In this paper, we propose Capacity-Net, a novel unsupervised learning approach aimed at maximizing the achievable rate in reflecting intelligent surface (RIS)-aided millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. To combat the severe channel fading of the mmWave spectrum, we optimize the phase-shifting factors of the reflective elements in the RIS to enhance the achievable rate. However, most optimization algorithms rely heavily on complete and accurate channel state information (CSI), which is often challenging to acquire since the RIS is mostly composed of passive components. To circumvent this challenge, we leverage unsupervised learning techniques with implicit CSI provided by the received pilot signals. Specifically, evaluating the achievable rate as the performance metric of an unsupervised learning method usually requires perfect CSI. Instead of performing channel estimation, the proposed Capacity-Net establishes a mapping among the received pilot signals, the optimized RIS phase shifts, and the resultant achievable rates. Simulation results demonstrate the superiority of the proposed Capacity-Net-based unsupervised learning approach over learning methods based on traditional channel estimation.
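As a concrete illustration of the performance metric being maximized (not the paper's Capacity-Net itself), the sketch below computes the achievable rate of a cascaded RIS channel H_eff = H2 · diag(e^{jθ}) · H1 and improves it with a naive per-element phase search; all dimensions, the SNR, and the search procedure are illustrative assumptions.

```python
import numpy as np

def effective_channel(H1, H2, theta):
    """Cascaded RIS channel: H_eff = H2 · diag(e^{jθ}) · H1."""
    return H2 @ np.diag(np.exp(1j * theta)) @ H1

def achievable_rate(H_eff, snr):
    """MIMO achievable rate log2 det(I + snr/Nt · H H^H) in bits/s/Hz."""
    nr, nt = H_eff.shape
    G = np.eye(nr) + (snr / nt) * (H_eff @ H_eff.conj().T)
    _, logdet = np.linalg.slogdet(G)   # numerically stable log-determinant
    return logdet / np.log(2)

rng = np.random.default_rng(0)
N, nt, nr = 16, 4, 4                   # RIS elements, Tx/Rx antennas (illustrative)
H1 = (rng.standard_normal((N, nt)) + 1j * rng.standard_normal((N, nt))) / np.sqrt(2)
H2 = (rng.standard_normal((nr, N)) + 1j * rng.standard_normal((nr, N))) / np.sqrt(2)

# Random-phase baseline vs. a crude per-element phase search
theta = rng.uniform(0, 2 * np.pi, N)
base = achievable_rate(effective_channel(H1, H2, theta), snr=10.0)
cands = np.linspace(0, 2 * np.pi, 16, endpoint=False)
for i in range(N):
    rates = []
    for c in cands:
        t = theta.copy()
        t[i] = c
        rates.append(achievable_rate(effective_channel(H1, H2, t), snr=10.0))
    theta[i] = cands[int(np.argmax(rates))]
opt = achievable_rate(effective_channel(H1, H2, theta), snr=10.0)
```

The coordinate search here is only a stand-in for the learned optimizer; the point is that evaluating `achievable_rate` requires the full channel matrices, which is exactly the CSI the Capacity-Net approach avoids estimating.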


Deep Reinforcement Learning-Based Precoding for Multi-RIS-Aided Multiuser Downlink Systems with Practical Phase Shift

Chou, Po-Heng, Zheng, Bo-Ren, Huang, Wan-Jen, Saad, Walid, Tsao, Yu, Chang, Ronald Y.

arXiv.org Artificial Intelligence

This study considers a multiuser downlink system aided by multiple reconfigurable intelligent surfaces (RISs), with the goal of jointly optimizing the transmitter precoding and the RIS phase shift matrix to maximize spectrum efficiency. Unlike prior work that assumed ideal RIS reflectivity, a practical coupling effect between reflecting amplitude and phase shift is considered for the RIS elements. This makes the optimization problem non-convex. To address this challenge, we propose a deep deterministic policy gradient (DDPG)-based deep reinforcement learning (DRL) framework. The proposed model is evaluated under both fixed and random numbers of users in practical mmWave channel settings. Simulation results demonstrate that, despite its complexity, the proposed DDPG approach significantly outperforms optimization-based algorithms and double deep Q-learning, particularly in scenarios with random user distributions.
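The amplitude–phase coupling referred to above is commonly modeled in the RIS literature as β(θ) = (1 − β_min)·((sin(θ − φ) + 1)/2)^α + β_min, so the reflection coefficient β(θ)e^{jθ} is no longer unit-modulus. The sketch below evaluates this model; the parameter values are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def ris_amplitude(theta, beta_min=0.2, phi=0.43 * np.pi, alpha=1.6):
    """Phase-dependent reflection amplitude β(θ); parameters are illustrative."""
    return (1 - beta_min) * ((np.sin(theta - phi) + 1) / 2) ** alpha + beta_min

theta = np.linspace(-np.pi, np.pi, 7)
amp = ris_amplitude(theta)
# Practical reflection coefficient: amplitude coupled to the chosen phase shift
coeff = amp * np.exp(1j * theta)
```

Because β depends on θ, tuning a phase shift also changes the reflected power, which is what breaks the convex structure exploited by ideal-reflectivity designs.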


Green Learning for STAR-RIS mmWave Systems with Implicit CSI

Huang, Yu-Hsiang, Chou, Po-Heng, Huang, Wan-Jen, Saad, Walid, Kuo, C. -C. Jay

arXiv.org Artificial Intelligence

In this paper, a green learning (GL)-based precoding framework is proposed for simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS)-aided millimeter-wave (mmWave) MIMO broadcasting systems. Motivated by the growing emphasis on environmental sustainability in future 6G networks, this work adopts a broadcasting transmission architecture for scenarios where multiple users share identical information, improving spectral efficiency and reducing redundant transmissions and power consumption. Unlike conventional optimization methods such as block coordinate descent (BCD), which require perfect channel state information (CSI) and iterative computation, the proposed GL framework operates directly on received uplink pilot signals without explicit CSI estimation. Unlike deep learning (DL) approaches that require CSI-based labels for training, the proposed GL approach also avoids deep neural networks and backpropagation, leading to a more lightweight design. Although the proposed GL framework is trained with supervision generated by BCD under full CSI, inference is performed in a fully CSI-free manner. The proposed GL integrates subspace approximation with adjusted bias (Saab), relevant feature test (RFT)-based supervised feature selection, and eXtreme gradient boosting (XGBoost)-based decision learning to jointly predict the STAR-RIS coefficients and transmit precoder. Simulation results show that the proposed GL approach achieves competitive spectral efficiency compared to BCD and DL-based models, while reducing floating-point operations (FLOPs) by over four orders of magnitude. These advantages make the proposed GL approach highly suitable for real-time deployment in energy- and hardware-constrained broadcasting scenarios.
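A minimal stand-in for the supervised feature-selection step is sketched below, assuming a simple single-split mean-squared-error criterion: features whose best threshold split most reduces the regression error are kept. The actual RFT used in green learning is more elaborate; this is only an illustration of the idea of label-aware, backpropagation-free feature ranking.

```python
import numpy as np

def rft_style_scores(X, y, n_thresholds=8):
    """Score each feature by the best single-split reduction in MSE of y
    (an illustrative stand-in for the relevant feature test)."""
    base = np.var(y)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        x = X[:, j]
        best = base
        for t in np.quantile(x, np.linspace(0.1, 0.9, n_thresholds)):
            left, right = y[x <= t], y[x > t]
            if len(left) == 0 or len(right) == 0:
                continue
            # weighted within-partition variance = split MSE
            mse = (len(left) * np.var(left) + len(right) * np.var(right)) / len(y)
            best = min(best, mse)
        scores[j] = base - best
    return scores

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))
y = 2.0 * X[:, 2] + 0.1 * rng.standard_normal(500)   # only feature 2 is relevant
scores = rft_style_scores(X, y)
```

The selected features would then feed a boosted-tree regressor (XGBoost in the paper) rather than a neural network, which is where the large FLOP savings come from.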



Handoff Design in User-Centric Cell-Free Massive MIMO Networks Using DRL

Ammar, Hussein A., Adve, Raviraj, Shahbazpanahi, Shahram, Boudreau, Gary, Bahceci, Israfil

arXiv.org Artificial Intelligence

In the user-centric cell-free massive MIMO (UC-mMIMO) network scheme, user mobility necessitates updating the set of serving access points to maintain user-centric clustering. Such updates are typically performed through handoff (HO) operations; however, frequent HOs incur overhead from the allocation and release of resources. This paper presents a deep reinforcement learning (DRL)-based solution to predict and manage these connections for mobile users. Our solution employs the Soft Actor-Critic algorithm, with a continuous action space representation, to train a deep neural network that serves as the HO policy. We propose a novel reward function that incorporates a HO penalty in order to balance the attainable rate against the overhead associated with HOs. We develop two variants of our system: the first uses mobility direction-assisted (DA) observations based on the user movement pattern, while the second uses history-assisted (HA) observations based on the history of the large-scale fading (LSF). Simulation results show that our DRL-based continuous action space approach is more scalable than its discrete-space counterpart, and that the derived HO policy automatically learns to gather HOs in specific time slots to minimize the overhead of initiating HOs. Our solution can also operate in real time with a response time less than 0.

Index Terms — Mobility, handoff, handover, user-centric, cell-free massive MIMO, distributed MIMO, deep reinforcement learning, soft actor-critic, machine learning, channel aging.

User-centric cell-free massive MIMO (UC-mMIMO) is a wireless network architecture in which each user is served by a custom group of neighboring access points (APs) connected to a central unit (CU) via fronthaul links [1]. Unlike current cellular systems based on macro base stations, UC-mMIMO deploys cooperative APs that jointly serve users without relying on traditional cellular boundaries. UC-mMIMO helps achieve reliable wireless connectivity and provides uniform performance throughout the network [1], [2]. However, this beyond-5G architecture introduces the key challenge of determining the connections between the APs and the users as users move through the network [3].
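The rate-versus-overhead reward structure described above can be sketched as follows, with the HO penalty counting APs added to or dropped from the serving cluster; the penalty weight and the symmetric-difference HO count are illustrative assumptions, not the paper's exact formulation.

```python
def count_handoffs(prev_cluster, new_cluster):
    """Each AP added to or dropped from the serving cluster triggers
    resource allocation/release overhead."""
    return len(prev_cluster ^ new_cluster)  # symmetric difference

def ho_reward(rate, prev_cluster, new_cluster, penalty=0.1):
    """Reward balancing attainable rate against HO overhead
    (penalty weight is an illustrative assumption)."""
    return rate - penalty * count_handoffs(prev_cluster, new_cluster)

# AP 1 dropped and AP 4 added: two HO events charged against the rate
r = ho_reward(5.0, {1, 2, 3}, {2, 3, 4})
```

Under such a reward, a policy that batches several HOs into one time slot pays the penalty once per changed AP but amortizes the disruption, which matches the batching behavior the simulations report.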


Near-field Beamforming for Extremely Large-scale MIMO Based on Unsupervised Deep Learning

Nie, Jiali, Cui, Yuanhao, Yang, Zhaohui, Yuan, Weijie, Jing, Xiaojun

arXiv.org Artificial Intelligence

Extremely Large-scale Array (ELAA) is considered a frontier technology for future communication systems, pivotal in improving the rate and spectral efficiency of wireless systems. However, because ELAA employs a multitude of antennas operating at higher frequencies, users are typically situated in the near-field region, where the wavefront is spherical. This inevitably leads to a significant increase in beam training overhead, requiring a complex two-dimensional beam search over both the angle domain and the distance domain. To address this problem, we propose a near-field beamforming method based on unsupervised deep learning. Our convolutional neural network efficiently extracts complex channel state information features by strategically selecting padding and kernel size. We optimize the beamformers to maximize achievable rates in a multi-user network without relying on predefined custom codebooks. Upon deployment, the model requires only pre-estimated channel state information as input to derive the optimal beamforming vector. Simulation results show that our proposed scheme obtains stable beamforming gain compared with the baseline scheme. Furthermore, owing to the inherent traits of deep learning methodologies, this approach substantially diminishes the beam training costs in near-field regions.
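A minimal sketch of the spherical-wavefront (near-field) array response that drives the two-dimensional angle–distance search: unlike the far-field planar model, the phase of each element depends on its exact distance to the source, so the beamformer must be matched in both angle and range. Array size, carrier wavelength, and geometry below are illustrative assumptions.

```python
import numpy as np

def nf_steering(n, spacing, wavelength, r, phi):
    """Near-field ULA steering vector toward a source at range r, angle phi.
    Uses the exact per-element distance (spherical wavefront)."""
    k = 2 * np.pi / wavelength
    y = (np.arange(n) - (n - 1) / 2) * spacing       # element positions on the axis
    dist = np.sqrt(r**2 + y**2 - 2 * r * y * np.sin(phi))
    return np.exp(-1j * k * (dist - r))

n, lam = 256, 0.01                       # 256 elements, ~30 GHz (illustrative)
a = nf_steering(n, lam / 2, lam, r=5.0, phi=0.3)
w = a / np.sqrt(n)                       # matched (focused) beamformer

gain_matched = np.abs(w.conj() @ a) ** 2 / n                      # = 1 when matched
a_far = nf_steering(n, lam / 2, lam, r=20.0, phi=0.3)             # same angle, wrong range
gain_mismatch = np.abs(w.conj() @ a_far) ** 2 / n
```

The range mismatch alone collapses the gain even at the correct angle, which is why near-field beam training must sweep distance as well as angle, and why learning the mapping from CSI to beamformer directly can avoid that codebook sweep.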


Rate-Distortion-Perception Theory for Semantic Communication

Chai, Jingxuan, Xiao, Yong, Shi, Guangming, Saad, Walid

arXiv.org Artificial Intelligence

Semantic communication has attracted significant interest recently due to its capability to meet the fast-growing demand for user-defined and human-oriented communication services such as holographic communications, eXtended reality (XR), and human-to-machine interactions. Unfortunately, recent studies suggest that traditional Shannon information theory, focusing mainly on delivering semantic-agnostic symbols, is not sufficient to investigate the semantic-level perceptual quality of the recovered messages at the receiver. In this paper, we study the achievable data rate of semantic communication under symbol distortion and semantic perception constraints. Motivated by the fact that semantic information generally involves rich intrinsic knowledge that cannot always be directly observed by the encoder, we consider a semantic information source that can only be indirectly sensed by the encoder. Both the encoder and the decoder can access various types of side information that may be closely related to the user's communication preference. We derive the achievable region that characterizes the tradeoff among data rate, symbol distortion, and semantic perception, and prove that it is achievable by a stochastic coding scheme. We derive a closed-form achievable rate for a binary semantic information source under any given distortion and perception constraints. We observe that there exist cases in which the receiver can directly infer the semantic information source, satisfying certain distortion and perception constraints, without requiring any data communication from the transmitter. Experimental results based on an image semantic source signal are presented to verify our theoretical observations.
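For context on the binary-source result, the classic distortion-only baseline that the rate–distortion–perception region extends is R(D) = H(p) − H(D) for a Bernoulli(p) source under Hamming distortion. The sketch below computes it; the perception-constrained rate of the paper is a different, richer quantity and is not reproduced here.

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_distortion_binary(p, D):
    """Classic R(D) = H(p) - H(D) for a Bernoulli(p) source, Hamming distortion."""
    if D >= min(p, 1 - p):
        return 0.0          # distortion is free beyond this point
    return h2(p) - h2(D)

r = rate_distortion_binary(0.5, 0.11)   # rate needed at 11% bit-flip distortion
```

Adding a perception constraint (matching the distribution of the reconstruction to that of the source) can only increase the required rate at a given distortion, which is the tradeoff the achievable region characterizes.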


Coding for the Gaussian Channel in the Finite Blocklength Regime Using a CNN-Autoencoder

Hesham, Nourhan, Bouzid, Mohamed, Abdel-Qader, Ahmad, Chaaban, Anas

arXiv.org Artificial Intelligence

The development of delay-sensitive applications that require ultra-high reliability has created an additional challenge for wireless networks. This led to ultra-reliable low-latency communications (URLLC) as a use case that 5G and beyond-5G systems must support. However, supporting low-latency communications requires the use of short codes, while attaining vanishing frame error probability (FEP) requires long codes. Thus, developing codes for the finite blocklength regime (FBR) that achieve given reliability requirements is necessary. This paper investigates the potential of convolutional neural network autoencoders (CNN-AEs) in approaching the theoretical maximum achievable rate over a Gaussian channel for a range of signal-to-noise ratios at a fixed blocklength and target FEP, which is a different perspective from existing works that explore the use of CNNs from bit-error and symbol-error rate perspectives. We explain the studied CNN-AE architecture, evaluate it numerically, and compare it to the theoretical maximum achievable rate and the achievable rates of polar-coded quadrature amplitude modulation (QAM), Reed-Muller-coded QAM, multilevel polar-coded modulation, and a TurboAE-MOD scheme from the literature. Numerical results show that the CNN-AE outperforms these benchmark schemes and approaches the theoretical maximum rate, demonstrating the capability of CNN-AEs in learning good codes for delay-constrained applications.
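The theoretical maximum achievable rate at finite blocklength is well approximated by the normal approximation R ≈ C − sqrt(V/n)·Q⁻¹(ε) + log₂(n)/(2n), where C is capacity, V the channel dispersion, n the blocklength, and ε the target error probability. The sketch below evaluates it for the complex AWGN channel with illustrative parameters; it is a benchmark computation, not the paper's CNN-AE.

```python
import math
from statistics import NormalDist

def normal_approx_rate(snr, n, eps):
    """Normal approximation to the maximum rate of the complex AWGN channel
    at blocklength n and error probability eps (bits per channel use)."""
    C = math.log2(1 + snr)
    V = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2   # channel dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)                   # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv + math.log2(n) / (2 * n)

r_short = normal_approx_rate(snr=10.0, n=128, eps=1e-3)
r_long = normal_approx_rate(snr=10.0, n=1000, eps=1e-3)
```

The gap to capacity shrinks as n grows, which quantifies the short-code penalty that URLLC code designs, learned or classical, must fight.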


Deep Learning for Hybrid Beamforming with Finite Feedback in GSM Aided mmWave MIMO Systems

Lu, Zhilin, Zhang, Xudong, Zeng, Rui, Wang, Jintao

arXiv.org Artificial Intelligence

Hybrid beamforming is widely recognized as an important technique for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. Generalized spatial modulation (GSM) is further introduced to improve spectrum efficiency. However, most existing works on beamforming assume perfect channel state information (CSI), which is unrealistic in practical systems. In this paper, the joint optimization of downlink pilot training, channel estimation, CSI feedback, and hybrid beamforming is considered in GSM-aided frequency division duplexing (FDD) mmWave MIMO systems. With the help of deep learning, the GSM hybrid beamformers are designed via unsupervised learning in an end-to-end way. Experiments show that the proposed multi-resolution network, named GsmEFBNet, achieves a higher rate with fewer feedback bits than conventional algorithms.
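As a small illustration of why GSM improves spectrum efficiency: beyond the modulated symbols, the choice of which transmit antennas are active itself conveys floor(log₂ C(Nt, Na)) bits per channel use. The antenna counts below are illustrative.

```python
from math import comb, floor, log2

def gsm_spatial_bits(n_tx, n_active):
    """Extra bits per channel use carried by the antenna-activation
    pattern in generalized spatial modulation."""
    return floor(log2(comb(n_tx, n_active)))

bits = gsm_spatial_bits(8, 2)   # C(8,2) = 28 activation patterns
```

These spatial bits come on top of the constellation bits, which is why the feedback and beamforming design must account for the activation pattern jointly with the precoder.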